9 research outputs found

    Vehicle make and model recognition in CCTV footage

    Get PDF
    This paper presents a novel approach to Vehicle Make and Model Recognition (VMMR) in CCTV video footage. Coherent Point Drift (CPD) is used to effectively remove the skew of detected vehicles, since CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A novel Region of Interest (ROI) segmentation approach is also proposed. A Local Energy Shape Histogram (LESH) feature-based approach is used for make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximize the reliability of the final outcome. Experimental results are provided to show that the proposed system achieves an accuracy of over 95% when tested on real CCTV footage with no prior camera calibration.
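
    A minimal sketch of the skew-correction idea only, not the paper's implementation: it estimates a rigid transform between two 2-D point sets using a Procrustes/SVD solution. The paper uses CPD, which also copes with noisy, unmatched points, whereas this stand-in assumes point correspondences are already known; the toy contour points and the 30-degree skew below are invented for illustration.

```python
# Stand-in for CPD-style skew removal: estimate s, R, t so that s*R@source + t ~ target.
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Umeyama-style rigid alignment of two corresponding 2-D point sets."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t
    U, sigma, Vt = np.linalg.svd(T.T @ S)        # cross-covariance decomposition
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(sigma) @ D) / (S ** 2).sum()
    t = mu_t - s * (R @ mu_s)
    return s, R, t

# Toy usage: recover a known 30-degree skew applied to a synthetic point set.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = np.random.rand(50, 2)
skewed = pts @ R_true.T + np.array([2.0, -1.0])
s, R, t = rigid_align(pts, skewed)
corrected = (skewed - t) @ R / s                 # undo the estimated transform
print(np.allclose(corrected, pts, atol=1e-6))
```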

    Multiexposure and multifocus image fusion with multidimensional camera shake compensation

    Get PDF
    Multiexposure image fusion algorithms are used for enhancing the perceptual quality of an image captured by sensors of limited dynamic range. This is achieved by rendering a single scene based on multiple images captured at different exposure times. Similarly, multifocus image fusion is used when the limited depth of focus at a selected focus setting of a camera results in parts of an image being out of focus. The solution adopted is to fuse together a number of multifocus images to create an image that is focused throughout. A single algorithm that can perform both multifocus and multiexposure image fusion is proposed. This algorithm is a new approach in which a set of unregistered multiexposure/multifocus images is first registered before being fused, to compensate for the possible presence of camera shake. The registration of images is done by identifying matching key-points in constituent images using the Scale-Invariant Feature Transform (SIFT). The Random Sample Consensus (RANSAC) algorithm is used to identify inliers among the SIFT key-points, removing outliers that can cause errors in the registration process. Finally, the Coherent Point Drift (CPD) algorithm is used to register the images, preparing them to be fused in the subsequent fusion stage. For the fusion of images, a new approach based on an improved version of a wavelet-based contourlet transform is used. The experimental results and the detailed analysis presented show that the proposed algorithm is capable of producing high-dynamic-range (HDR) or multifocus images by registering and fusing a set of multiexposure or multifocus images taken in the presence of camera shake. Further, a comparison of the performance of the proposed algorithm with a number of state-of-the-art algorithms and commercial software packages is provided. In particular, our literature review has revealed that this is one of the first attempts in which the compensation of camera shake, a very likely practical problem in HDR image capture using handheld devices, has been addressed as part of a multifocus and multiexposure image enhancement system. © 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
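
    A sketch of the registration front end only, under assumptions: it uses OpenCV (4.4 or later, with SIFT available) to match key-points and prune outliers with RANSAC, and a RANSAC-estimated homography warp stands in for the paper's CPD registration step; the fusion stage and the file names in the usage comment are not from the paper.

```python
# SIFT key-point matching + RANSAC inlier selection, then a simple homography warp.
import cv2
import numpy as np

def register_to_reference(reference_bgr, moving_bgr, ratio=0.75):
    sift = cv2.SIFT_create()
    gray_ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    gray_mov = cv2.cvtColor(moving_bgr, cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = sift.detectAndCompute(gray_ref, None)
    kp_mov, des_mov = sift.detectAndCompute(gray_mov, None)

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher()
    raw = matcher.knnMatch(des_mov, des_ref, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences caused by camera shake or scene change.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference_bgr.shape[:2]
    return cv2.warpPerspective(moving_bgr, H, (w, h))

# Hypothetical usage: align the second exposure onto the first before fusion.
# ref = cv2.imread("exposure_0.jpg"); mov = cv2.imread("exposure_1.jpg")
# registered = register_to_reference(ref, mov)
```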

    The application of machine learning in multi sensor data fusion for activity recognition in mobile device space

    Get PDF
    The present generation of mobile handheld devices comes equipped with a large number of sensors. The key sensors include the Ambient Light Sensor, Proximity Sensor, Gyroscope, Compass and the Accelerometer. Many mobile applications are driven by the readings obtained from only one or two of these sensors. However, the presence of multiple sensors enables the determination of more detailed activities carried out by the user of a mobile device, thus enabling smarter mobile applications to be developed that respond more appropriately to user behavior and device usage. In the proposed research we use recent advances in machine learning to fuse together the data obtained from all key sensors of a mobile device. We investigate the possible use of single- and ensemble-classifier-based approaches to identify a mobile device's behavior in the space in which it is present. Feature selection algorithms are used to remove non-discriminant features that often lead to poor classifier performance. As the sensor readings are noisy and include a significant proportion of missing values and outliers, we use machine learning based approaches to clean the raw data obtained from the sensors before use. Based on selected practical case studies, we demonstrate the ability to accurately recognize device behavior based on multi-sensor data fusion.
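
    An illustrative sketch only, since the paper does not specify an implementation: a scikit-learn pipeline that imputes missing sensor readings, drops non-discriminant features via feature selection and feeds the remainder to an ensemble classifier. The CSV file, its column names and the label name are hypothetical.

```python
# Impute -> select features -> ensemble classifier, evaluated with cross-validation.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Hypothetical frame: one row per time window of fused sensor readings.
data = pd.read_csv("sensor_windows.csv")          # e.g. accel_x, gyro_y, light, proximity, ...
X = data.drop(columns=["activity"])               # "activity" is the label to predict
y = data["activity"]

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),         # handle missing readings
    ("select", SelectKBest(score_func=f_classif, k=10)),  # keep the 10 most discriminant features
    ("classify", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(model, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```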

    Real-time speaker identification for video conferencing

    Get PDF
    Automatic speaker identification in a videoconferencing environment allows conference attendees to focus their attention on the conference rather than being engaged manually in identifying which channel is active and who may be the speaker within that channel. In this work we present a real-time, audio-coupled, video-based approach to address this problem, with the focus placed on the video analysis side. The system is driven by the need to detect a talking human via the use of computer vision algorithms. The initial stage consists of a face detector, which is subsequently followed by a lip-localization algorithm that segments the lip region. A novel approach for lip movement detection based on image registration using the Coherent Point Drift (CPD) algorithm is proposed; CPD is a technique for rigid and non-rigid registration of point sets. We provide experimental results to analyse the performance of the algorithm when used to monitor real-life videoconferencing data.
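
    A minimal sketch of the front end only, under assumptions: OpenCV's stock Haar cascade stands in for the paper's face detector, and the lower third of each face box is used as a crude lip ROI. The paper's lip-localization algorithm and its CPD-based lip-movement detection are not reproduced, and the video file name in the usage comment is hypothetical.

```python
# Face detection followed by a simple lip-region crop for each detected face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_regions(frame_bgr):
    """Return one cropped lip-region image per detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        lip = frame_bgr[y + 2 * h // 3 : y + h, x : x + w]   # lower third of the face box
        crops.append(lip)
    return crops

# Usage on a conference channel: successive lip crops would then be registered
# frame-to-frame to score lip movement.
# cap = cv2.VideoCapture("channel_01.mp4")
# ok, frame = cap.read()
# rois = lip_regions(frame) if ok else []
```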

    Illumination modelling of a mobile device environment for effective use in driving mobile apps

    Get PDF
    The present generation of Ambient Light Sensors (ALS) in mobile handheld devices suffers from two practical shortcomings: the ALSs are narrow-angle, i.e. they respond effectively only within a narrow angle of operation, and they exhibit a latency of operation. As a result, mobile applications that operate based on ALS readings can perform sub-optimally, especially when operated in environments with non-uniform illumination. The applications will either adapt with unacceptable levels of latency and/or may demonstrate a discrete nature of operation. In this paper we propose a framework to predict the ambient illumination of an environment in which a mobile device is present. The predictions are based on an illumination model that is developed from a small number of readings taken during an application calibration stage. We use a machine learning based approach in developing the models. Five different regression models were developed, implemented and compared, based on Polynomial, Gaussian, Sum of Sine, Fourier and Smoothing Spline functions. Approaches to remove noisy data, missing values and outliers were applied prior to the modelling stage to remove their negative effects on modelling. The prediction accuracy for all models was found to be above 0.99 when measured using the R-squared test, with the best performance being from the Smoothing Spline model. In this paper we also discuss the mathematical complexity of each model and investigate how to make compromises in finding the best model.
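
    An illustrative sketch under assumptions: the calibration readings below are synthetic, and only two of the paper's five model families (polynomial and smoothing spline) are fitted and compared with a plain R-squared score.

```python
# Fit two illumination models to synthetic calibration data and compare R^2.
import numpy as np
from scipy.interpolate import UnivariateSpline

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic calibration readings: device orientation angle (degrees) vs. ALS lux.
angle = np.linspace(0, 180, 25)
lux = 400 * np.cos(np.deg2rad(angle)) ** 2 + np.random.normal(0, 10, angle.size)

# Polynomial model of the illumination field.
poly = np.poly1d(np.polyfit(angle, lux, deg=4))

# Smoothing-spline model; the smoothing factor s trades fidelity for smoothness.
spline = UnivariateSpline(angle, lux, s=len(angle) * 10.0)

print("polynomial R^2:", r_squared(lux, poly(angle)))
print("spline R^2:    ", r_squared(lux, spline(angle)))
```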

    Use of artificial intelligence to improve resilience and preparedness against adverse flood events

    Get PDF
    The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can be a result of snow or ice melt and can occur in specific geographic places and in certain seasons. The motivation behind this research arose from the Building Resilience into Risk Management (BRIM) project, which looks at resilience in water systems. This research applies state-of-the-art techniques, i.e. AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events in order to learn from the past, extract patterns and information, and understand flood behaviours, with the aim of improving resilience, preventing damage and saving lives. In this paper, various ML models have been developed and evaluated for classifying floods, e.g. flash flood, lakeshore flood, etc., using current information, i.e. weather forecasts, in different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and Lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness, and thus improve flood resilience.
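
    An illustrative sketch, not the paper's code: scikit-learn stand-ins for the classifiers compared in the paper (Random Forest; a decision tree approximating Weka's J48; k-NN as a lazy learner), evaluated by cross-validated accuracy. The CSV file and its columns are hypothetical weather-forecast features with a flood-type label.

```python
# Compare three classifier families on a hypothetical flood-event dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("flood_events.csv")            # e.g. rainfall_mm, temp_c, snowmelt, location, ...
X = pd.get_dummies(data.drop(columns=["flood_type"]))
y = data["flood_type"]                            # e.g. "flash", "lakeshore", "river"

models = {
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Decision tree (J48-like)": DecisionTreeClassifier(random_state=0),
    "Lazy (k-NN)": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```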

    Vehicle Make and Model Recognition in CCTV footage

    No full text
    This paper presents a novel approach to Vehicle Make and Model Recognition (VMMR) in CCTV video footage. Coherent Point Drift (CPD) is used to effectively remove the skew of detected vehicles, since CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A novel Region of Interest (ROI) segmentation approach is also proposed. A Local Energy Shape Histogram (LESH) feature-based approach is used for make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximize the reliability of the final outcome. Experimental results are provided to show that the proposed system achieves an accuracy of over 95% when tested on real CCTV footage with no prior camera calibration.

    Supplementary Information for "A Model-Based Engineering Methodology and Architecture for Resilience in Systems-of-Systems: A Case of Water Supply Resilience to Flooding."

    No full text
    There is a clear and evident requirement for a conscious effort to be made towards a resilient water system-of-systems (SoS) within the UK, in terms of both supply and flooding. The impact of flooding goes beyond the immediately obvious social aspects of disruption, cascading to and affecting a wide range of connected systems. The issues caused by flooding need to be treated in a fashion that adopts an SoS approach, evaluating the risks associated with interconnected systems and assessing resilience against flooding from various perspectives. Changes in climate result in deviations in the frequency and intensity of precipitation; variations in annual patterns make planning and management for resilience more challenging. This article presents a verified model-based systems engineering methodology for decision-makers in the water sector to holistically and systematically implement resilience within the water context, specifically focusing on the effects of flooding on water supply. A novel resilience viewpoint, solely focused on the resilience aspects of the architecture, is presented within this paper. Systems architecture modelling forms the basis of the methodology and includes this resilience viewpoint to help evaluate current SoS resilience and to design for future resilient states. Architecting for resilience, and subsequently simulating designs, is seen as the solution to ensuring that system performance does not suffer and that systems continue to function at the desired levels of operability. The case study presented within this paper demonstrates the application of the SoS resilience methodology to water supply networks in times of flooding, highlighting how such a methodology can be used for approaching resilience in the water sector from an SoS perspective. The methodology highlights where resilience improvements are necessary and also provides a process through which architecture solutions can be proposed and tested.

    Supplementary Information Files for "Use of Artificial Intelligence to Improve Resilience and Preparedness Against Adverse Flood Events"

    No full text
    The main focus of this paper is the novel use of Artificial Intelligence (AI) in natural disasters, more specifically flooding, to improve flood resilience and preparedness. Different types of flood have varying consequences and follow specific patterns. For example, a flash flood can be a result of snow or ice melt and can occur in specific geographic places and in certain seasons. The motivation behind this research arose from the Building Resilience into Risk Management (BRIM) project, which looks at resilience in water systems. This research applies state-of-the-art techniques, i.e. AI and more specifically Machine Learning (ML) approaches, to big data collected from previous flood events in order to learn from the past, extract patterns and information, and understand flood behaviours, with the aim of improving resilience, preventing damage and saving lives. In this paper, various ML models have been developed and evaluated for classifying floods, e.g. flash flood, lakeshore flood, etc., using current information, i.e. weather forecasts, in different locations. The analytical results show that the Random Forest technique provides the highest classification accuracy, followed by the J48 decision tree and Lazy methods. The classification results can lead to better decision-making on what measures can be taken for prevention and preparedness, and thus improve flood resilience.